A generalization of the entropy power inequality with applications

Authors

  • Ram Zamir
  • Meir Feder
Abstract

We prove the following generalization of the Entropy Power Inequality: h(Ax) ≥ h(Ax̃), where h(·) denotes (joint) differential entropy, x = (x_1, …, x_n) is a random vector with independent components, x̃ = (x̃_1, …, x̃_n) is a Gaussian vector with independent components such that h(x̃_i) = h(x_i), i = 1, …, n, and A is any matrix. This generalization of the entropy power inequality is applied to show that a non-Gaussian vector with independent components becomes "closer" to Gaussianity after a linear transformation, where the distance to Gaussianity is measured by the information divergence. Another application is a lower bound, greater than zero, on the mutual information between non-overlapping spectral components of a non-Gaussian white process. Finally, we describe a dual generalization of the Fisher Information Inequality.

1 The Generalization of the Entropy Power Inequality

Consider the (joint) differential entropy h(Ax) of a linear transformation y = Ax, where x = (x_1, …, x_n) is a random vector and

    h(y) = E{−log f(y)}    (1)

where we assume that y has a density f(·). Throughout the manuscript log x = log_2 x and the entropy is measured in bits. Assume that dim A = m′ × n and Rank A = m. In some cases this entropy is easily calculated or bounded:

1. A is an invertible matrix (i.e., m′ = m = n). In this case the linear transformation only scales and shuffles the components of x, so the entropy is merely shifted,

    h(Ax) = h(x) + log |A|    (2)

where |·| denotes the (absolute value of the) determinant.

2. A does not have full row rank (i.e., m′ > m). In this case there is a deterministic relation between the components of y, and thus

    h(Ax) = −∞ .    (3)

3. x = x* is a Gaussian vector. The linear transformation A preserves normality, and so

    h(Ax*) = (m/2) log(2πe |A R_x A^t|^{1/m})    (4)

where R_x is the covariance matrix of x and A R_x A^t is the covariance matrix of y = Ax*. Since, for a given covariance, the Gaussian distribution maximizes the entropy, the expression in (4) upper bounds the entropy of y = Ax in the general case, i.e.,

    h(Ax) ≤ h(Ax*)    (5)

where x* is now a Gaussian vector with the same covariance …
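To make the sandwich between the generalized-EPI lower bound h(Ax̃) and the upper bound h(Ax*) in (5) concrete, here is a minimal numerical sketch (illustration only, not from the paper): it assumes independent Laplace-distributed components and an arbitrary invertible 2×2 matrix A, and evaluates both Gaussian bounds in closed form via (4); the true h(Ax) lies between them.

```python
# Minimal sketch (illustration only, not from the paper): for independent Laplace
# components x_i and an invertible matrix A, compute the two analytic Gaussian bounds
#   h(A x~)  <=  h(A x)  <=  h(A x*)
# where x~ has independent Gaussian components with h(x~_i) = h(x_i) (generalized-EPI
# lower bound) and x* is Gaussian with the same covariance as x (upper bound (5)).
import numpy as np

def gaussian_entropy_bits(cov):
    """Differential entropy, in bits, of a Gaussian vector with covariance matrix `cov`."""
    m = cov.shape[0]
    return 0.5 * np.log2((2 * np.pi * np.e) ** m * np.linalg.det(cov))

# Laplace(b_i) components: h(x_i) = log2(2*e*b_i) bits, Var(x_i) = 2*b_i^2.
b = np.array([1.0, 3.0])
h_xi = np.log2(2 * np.e * b)           # per-component entropies (bits)
var_xi = 2 * b ** 2                    # per-component variances

# Matching-entropy Gaussian variances: 0.5*log2(2*pi*e*s2) = h(x_i)  =>  s2 = 2^(2h) / (2*pi*e)
s2_tilde = 2.0 ** (2 * h_xi) / (2 * np.pi * np.e)

A = np.array([[1.0, 0.5],
              [0.2, 1.0]])             # any invertible matrix

lower = gaussian_entropy_bits(A @ np.diag(s2_tilde) @ A.T)  # h(A x~): generalized-EPI bound
upper = gaussian_entropy_bits(A @ np.diag(var_xi) @ A.T)    # h(A x*): same-covariance bound (5)
print(f"h(A x~) = {lower:.3f} bits  <=  h(Ax)  <=  h(A x*) = {upper:.3f} bits")
```

Estimating h(Ax) itself would require a nonparametric entropy estimator; the point of the sketch is only that both bounding entropies are Gaussian and therefore follow from (4) with no integration.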


Similar resources

On the generalization of Trapezoid Inequality for functions of two variables with bounded variation and applications

In this paper, a generalization of the trapezoid inequality for functions of two independent variables with bounded variation is proved, and some applications are given.

Some functional inequalities in variable exponent spaces with a more generalization of uniform continuity condition

Some functional inequalities in variable exponent Lebesgue spaces are presented. The bi-weighted modular inequality with variable exponent $p(.)$ for the Hardy operator restricted to non-increasing functions, which is
$$\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^{p(x)} v(x)\,dx \leq C\int_0^\infty f(x)^{p(x)} u(x)\,dx,$$
is studied. We show that the exponent $p(.)$ for which these modular ine...

An Extremal Inequality Motivated by the Vector Gaussian Broadcast Channel Problem

We prove a new extremal inequality, motivated by the vector Gaussian broadcast channel problem. As a corollary, this inequality yields a generalization of the classical vector entropy-power inequality (EPI). As another corollary, this inequality sheds insight into maximizing the differential entropy of the sum of two jointly distributed random variables.

Entropy of infinite systems and transformations

The Kolmogorov-Sinai entropy is a far reaching dynamical generalization of Shannon entropy of information systems. This entropy works perfectly for probability measure preserving (p.m.p.) transformations. However, it is not useful when there is no finite invariant measure. There are certain successful extensions of the notion of entropy to infinite measure spaces, or transformations with ...

A multivariate generalization of Costa's entropy power inequality

A simple multivariate version of Costa’s entropy power inequality is proved. In particular, it is shown that if independent white Gaussian noise is added to an arbitrary multivariate signal, the entropy power of the resulting random variable is a multidimensional concave function of the individual variances of the components of the signal. As a side result, we also give an expression for the He...


Journal:
  • IEEE Trans. Information Theory

Volume: 39, Issue: -

Pages: -

Publication year: 1993